1.
Npj Ment Health Res ; 3(1): 12, 2024 Apr 02.
Article in English | MEDLINE | ID: mdl-38609507

ABSTRACT

Large language models (LLMs) such as OpenAI's GPT-4 (which powers ChatGPT) and Google's Gemini, built on artificial intelligence, hold immense potential to support, augment, or even eventually automate psychotherapy. Enthusiasm about such applications is mounting in the field as well as in industry. These developments promise to address insufficient mental healthcare system capacity and scale individual access to personalized treatments. However, clinical psychology is an uncommonly high-stakes application domain for AI systems, as responsible and evidence-based therapy requires nuanced expertise. This paper provides a roadmap for the ambitious yet responsible application of clinical LLMs in psychotherapy. First, a technical overview of clinical LLMs is presented. Second, the stages of integration of LLMs into psychotherapy are discussed while highlighting parallels to the development of autonomous vehicle technology. Third, potential applications of LLMs in clinical care, training, and research are discussed, highlighting areas of risk given the complex nature of psychotherapy. Fourth, recommendations for the responsible development and evaluation of clinical LLMs are provided, which include centering clinical science, involving robust interdisciplinary collaboration, and attending to issues like assessment, risk detection, transparency, and bias. Lastly, a vision is outlined for how LLMs might enable a new generation of studies of evidence-based interventions at scale, and how these studies may challenge assumptions about psychotherapy.

2.
PLoS One ; 19(4): e0300932, 2024.
Article in English | MEDLINE | ID: mdl-38625926

ABSTRACT

The COVID-19 pandemic placed a spotlight on alcohol use and the hardships of working within the food and beverage industry, with millions left jobless. Following previous studies that have found elevated rates of alcohol problems among bartenders and servers, here we studied the alcohol use of bartenders and servers who were employed during COVID. From February 12 to June 16, 2021, in the midst of the U.S. COVID national emergency declaration, survey data from 1,010 employed bartenders and servers were analyzed to quantify rates of excessive or hazardous drinking along with regression predictors of alcohol use as assessed by the 10-item Alcohol Use Disorders Identification Test (AUDIT). Findings indicate that more than 2 out of 5 (44%) people surveyed reported moderate or high alcohol problem severity (i.e., AUDIT scores of 8 or higher), a rate 4 to 6 times the heavy alcohol use rate reported pre- or mid-pandemic by adults within and outside the industry. Person-level factors (gender, substance use, mood) along with the drinking habits of one's core social group were significantly associated with alcohol use. Bartenders and servers reported surprisingly high rates of alcohol problem severity and experienced risk factors for hazardous drinking at multiple ecological levels. Because bartenders and servers are a highly vulnerable and understudied population, more studies are needed to assess and manage the true toll of alcohol consumption for industry employees.
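
The cutoff referenced above follows the standard AUDIT scoring rules: ten items, each scored 0-4, summed to a 0-40 total, with totals of 8 or higher conventionally treated as hazardous or harmful drinking. A minimal sketch of that scoring step; the respondent data below are invented for illustration.

```python
# Minimal sketch: scoring the 10-item AUDIT and applying the conventional cutoffs.
# Respondent item scores are invented for illustration only.

def audit_total(item_scores):
    """Sum the 10 AUDIT items (each scored 0-4, total range 0-40)."""
    if len(item_scores) != 10:
        raise ValueError("AUDIT has exactly 10 items")
    return sum(item_scores)

def audit_zone(total):
    """Map a total to the commonly used risk zones; 8+ is the cutoff cited above."""
    if total >= 20:
        return "possible dependence"
    if total >= 16:
        return "high risk"
    if total >= 8:
        return "hazardous/moderate risk"
    return "low risk"

respondents = {
    "r001": [3, 2, 1, 0, 1, 0, 1, 1, 0, 0],   # total 9  -> at/above cutoff
    "r002": [1, 1, 0, 0, 0, 0, 0, 0, 0, 0],   # total 2  -> low risk
}
for rid, items in respondents.items():
    total = audit_total(items)
    print(rid, total, audit_zone(total))
```

The 44% figure in the abstract corresponds to the share of respondents whose totals land at or above that cutoff of 8.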


Subjects
Alcohol-Related Disorders , Alcoholism , COVID-19 , Adult , Humans , Alcohol Drinking/epidemiology , COVID-19/epidemiology , Risk Factors
3.
Psychiatry Res ; 333: 115667, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38290286

ABSTRACT

In this narrative review, we survey recent empirical evaluations of AI-based language assessments and present a case that large language models are poised to change standardized psychological assessment. Artificial intelligence has been undergoing a purported "paradigm shift" initiated by new machine learning models, large language models (e.g., BERT, LLaMA, and the model behind ChatGPT). These models have led to unprecedented accuracy on most computerized language processing tasks, from web searches to automatic machine translation and question answering, while their dialogue-based forms, like ChatGPT, have captured the interest of over a million users. The success of large language models is mostly attributed to their capability to numerically represent words in their context, long a weakness of previous attempts to automate psychological assessment from language. While potential applications for automated therapy are beginning to be studied on the heels of ChatGPT's success, here we present evidence suggesting that, with thorough validation of targeted deployment scenarios, AI's newest technology can move mental health assessment away from rating scales and toward how people naturally communicate: in language.


Subjects
Artificial Intelligence , Language , Humans , Machine Learning
4.
Emotion ; 24(1): 106-115, 2024 Feb.
Article in English | MEDLINE | ID: mdl-37199938

ABSTRACT

Many scholars have proposed that feeling what we believe others are feeling (often known as "empathy") is essential for other-regarding sentiments and plays an important role in our moral lives. Caring for and about others without necessarily sharing their feelings (often known as "compassion") is also frequently discussed as a relevant force for prosocial motivation and action. Here, we explore the relationship between empathy and compassion using the methods of computational linguistics. Analyses of 2,356,916 Facebook posts suggest that individuals (N = 2,781) high in empathy use different language than those high in compassion, after accounting for shared variance between these constructs. Empathic people, controlling for compassion, often use self-focused language and write about negative feelings, social isolation, and feeling overwhelmed. Compassionate people, controlling for empathy, often use other-focused language and write about positive feelings and social connections. In addition, high empathy without compassion is related to negative health outcomes, while high compassion without empathy is related to positive health outcomes, positive lifestyle choices, and charitable giving. Such findings favor an approach to moral motivation that is grounded in compassion rather than empathy. (PsycInfo Database Record (c) 2024 APA, all rights reserved).
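
The "after accounting for shared variance" step above is essentially a residualization: each trait's unique association with a language feature is estimated while partialling out the other trait. A small illustrative sketch with simulated data (not the authors' actual pipeline, variables, or effect sizes):

```python
# Illustrative sketch: correlate a language feature with empathy while
# controlling for compassion by residualizing empathy on compassion.
# All data are simulated.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 500
compassion = rng.normal(size=n)
empathy = 0.5 * compassion + rng.normal(size=n)                     # overlapping traits
neg_emotion_words = 0.4 * (empathy - 0.5 * compassion) + rng.normal(size=n)

# Residualize empathy on compassion: what remains is empathy's unique variance.
X = compassion.reshape(-1, 1)
empathy_unique = empathy - LinearRegression().fit(X, empathy).predict(X)

print("raw r(empathy, negative-emotion words):",
      round(np.corrcoef(empathy, neg_emotion_words)[0, 1], 2))
print("r after controlling for compassion:",
      round(np.corrcoef(empathy_unique, neg_emotion_words)[0, 1], 2))
```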


Subjects
Emotions , Empathy , Humans , Motivation , Morals , Linguistics
5.
Article in English | MEDLINE | ID: mdl-38125747

ABSTRACT

Full national coverage below the state level is difficult to attain through survey-based data collection. Even the largest survey-based data collections, such as the CDC's Behavioral Risk Factor Surveillance System or the Gallup-Healthways Well-Being Index (both with more than 300,000 responses p.a.), only allow for the estimation of annual averages for about 260 of the roughly 3,000 U.S. counties when a threshold of 300 responses per county is used. Using a relatively high threshold of 300 responses gives substantially higher convergent validity (higher correlations with health variables) than lower thresholds, but covers a reduced and biased sample of the population. We present principled methods to interpolate spatial estimates and show that including large-scale geotagged social media data can increase interpolation accuracy. In this work, we focus on Gallup-reported life satisfaction, a widely used measure of subjective well-being. We use Gaussian Processes (GP), a formal Bayesian model, to interpolate life satisfaction, which we optimally combine with estimates from low-count data. We interpolate over several spaces (geographic and socioeconomic) and extend these evaluations to the space created by variables encoding the language frequencies of approximately 6 million geotagged Twitter users. We find that Twitter language use can serve as a rough aggregate measure of socioeconomic and cultural similarity, and improves upon estimates derived from a wide variety of socioeconomic, demographic, and geographic similarity measures. We show that applying Gaussian Processes to the limited Gallup data allows us to generate estimates for a much larger number of counties while maintaining the same level of convergent validity with external criteria (i.e., N = 1,133 vs. 2,954 counties). This work suggests that spatial coverage of psychological variables can be reliably extended through Bayesian techniques while maintaining out-of-sample prediction accuracy, and that Twitter language adds important information about cultural similarity over and above traditional socio-demographic and geographic similarity measures. Finally, to facilitate the adoption of these methods, we have also open-sourced an online tool that researchers can freely use to interpolate their data across geographies.
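
A minimal sketch of the core idea, Gaussian Process interpolation of a county-level outcome over a coordinate space, is below. The counties, coordinates, and scores are simulated, and the authors' actual model also combines low-count survey estimates and Twitter-language similarity spaces; this only shows the basic GP step.

```python
# Illustrative sketch: interpolate a county-level outcome with a Gaussian Process
# over geographic coordinates. All data are simulated.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(42)
n_observed = 200
X_obs = rng.uniform(low=[25, -120], high=[49, -70], size=(n_observed, 2))  # lat, lon
y_obs = np.sin(X_obs[:, 0] / 5) + 0.1 * rng.normal(size=n_observed)        # "life satisfaction"

kernel = 1.0 * RBF(length_scale=5.0) + WhiteKernel(noise_level=0.1)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X_obs, y_obs)

# Predict (with uncertainty) for counties that lack enough survey responses.
X_new = rng.uniform(low=[25, -120], high=[49, -70], size=(5, 2))
mean, std = gp.predict(X_new, return_std=True)
for m, s in zip(mean, std):
    print(f"interpolated score: {m:.2f} +/- {s:.2f}")
```

The same machinery works if the coordinates are replaced with socioeconomic or language-similarity dimensions, which is the substitution the study evaluates.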

6.
Sci Rep ; 13(1): 9027, 2023 06 03.
Article in English | MEDLINE | ID: mdl-37270657

ABSTRACT

Opioid poisoning mortality is a substantial public health crisis in the United States, with opioids involved in approximately 75% of the nearly 1 million drug-related deaths since 1999. Research suggests that the epidemic is driven by both over-prescribing and social and psychological determinants such as economic stability, hopelessness, and isolation. Hindering this research is a lack of measurements of these social and psychological constructs at fine-grained spatial and temporal resolutions. To address this issue, we use a multi-modal data set consisting of natural language from Twitter, psychometric self-reports of depression and well-being, and traditional area-based measures of socio-demographics and health-related risk factors. Unlike previous work using social media data, we do not rely on opioid- or substance-related keywords to track community poisonings. Instead, we leverage a large, open vocabulary of thousands of words in order to fully characterize communities suffering from opioid poisoning, using a sample of 1.5 billion tweets from 6 million U.S. county-mapped Twitter users. Results show that Twitter language predicted opioid poisoning mortality better than factors relating to socio-demographics, access to healthcare, physical pain, and psychological well-being. Additionally, risk factors revealed by the Twitter language analysis included negative emotions, discussions of long work hours, and boredom, whereas protective factors included resilience, travel/leisure, and positive emotions, dovetailing with results from the psychometric self-report data. The results show that natural language from public social media can be used as a surveillance tool for both predicting community opioid poisonings and understanding the dynamic social and psychological nature of the epidemic.
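
The open-vocabulary setup described above amounts to regressing a county-level outcome on a wide matrix of word (or topic) relative frequencies. A minimal sketch with simulated data follows; the real models use far larger vocabularies, topic features, and careful cross-validated evaluation.

```python
# Illustrative sketch: open-vocabulary prediction of a county outcome from
# word relative-frequency features. Vocabulary, counties, and rates are simulated.
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(7)
n_counties, vocab_size = 300, 1000
word_freqs = rng.dirichlet(np.ones(vocab_size), size=n_counties)  # per-county word distribution
true_weights = rng.normal(scale=0.5, size=vocab_size)
mortality = word_freqs @ true_weights + 0.05 * rng.normal(size=n_counties)

model = RidgeCV(alphas=np.logspace(-2, 3, 10))                    # regularized regression
r2_scores = cross_val_score(model, word_freqs, mortality, cv=10, scoring="r2")
print("mean out-of-sample R^2:", round(float(r2_scores.mean()), 3))
```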


Subjects
Social Media , Humans , United States/epidemiology , Opioid Analgesics , Self Report , Language , Anxiety
7.
Psychol Methods ; 28(6): 1478-1498, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37126041

ABSTRACT

The language that individuals use for expressing themselves contains rich psychological information. Recent significant advances in Natural Language Processing (NLP) and Deep Learning (DL), namely transformers, have resulted in large performance gains in tasks related to understanding natural language. However, these state-of-the-art methods have not yet been made easily accessible for psychology researchers, nor designed to be optimal for human-level analyses. This tutorial introduces text (https://r-text.org/), a new R package for analyzing and visualizing human language using transformers, the latest techniques from NLP and DL. The text package is both a modular solution for accessing state-of-the-art language models and an end-to-end solution catered for human-level analyses. Hence, text provides user-friendly functions tailored to testing hypotheses in the social sciences for both relatively small and large data sets. The tutorial describes methods for analyzing text, providing functions with reliable defaults that can be used off the shelf, as well as a framework that advanced users can build on for novel pipelines. The reader learns about three core methods: (1) textEmbed(), to transform text to modern transformer-based word embeddings; (2) textTrain() and textPredict(), to train predictive models with embeddings as input and to use those models to make predictions for new text; and (3) textSimilarity() and textDistance(), to compute semantic similarity/distance scores between texts. The reader also learns about two extended methods: (1) textProjection()/textProjectionPlot() and (2) textCentrality()/textCentralityPlot(), to examine and visualize text within the embedding space. (PsycInfo Database Record (c) 2024 APA, all rights reserved).
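
The embed-train-predict-similarity workflow described above can be sketched in Python as a rough analogue (the actual package is in R; textEmbed, textTrain, textPredict, and textSimilarity are its functions, not the ones below). The sketch assumes the sentence-transformers and scikit-learn libraries; the model name and toy data are illustrative choices, not recommendations from the tutorial.

```python
# Rough Python analogue of the text-package workflow (embed -> train/predict -> similarity).
import numpy as np
from sentence_transformers import SentenceTransformer
from sklearn.linear_model import Ridge

texts = ["I feel calm and hopeful today", "Everything has been overwhelming lately"]
ratings = np.array([6.5, 2.0])                     # toy well-being ratings

# 1) Embedding step (cf. textEmbed): map texts to transformer-based embeddings.
encoder = SentenceTransformer("all-MiniLM-L6-v2")
embeddings = encoder.encode(texts)

# 2) Train/predict step (cf. textTrain/textPredict): fit on embeddings, predict for new text.
model = Ridge(alpha=1.0).fit(embeddings, ratings)
new_emb = encoder.encode(["Feeling pretty good about things"])
print("predicted rating:", float(model.predict(new_emb)[0]))

# 3) Similarity step (cf. textSimilarity): cosine similarity between two texts' embeddings.
a, b = embeddings
print("cosine similarity:", float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b))))
```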


Subjects
Language , Natural Language Processing , Humans , Semantics , Social Sciences
8.
Neuropsychopharmacology ; 48(11): 1579-1585, 2023 10.
Article in English | MEDLINE | ID: mdl-37095253

ABSTRACT

Reoccurrence of use (relapse) and treatment dropout are frequently observed in substance use disorder (SUD) treatment. In the current paper, we evaluated the predictive capability of an AI-based digital phenotype using the social media language of patients receiving treatment for substance use disorders (N = 269). We found that language phenotypes outperformed a standard intake psychometric assessment scale when predicting patients' 90-day treatment outcomes. We also used a modern deep learning-based AI model, Bidirectional Encoder Representations from Transformers (BERT), to generate risk scores from the pre-treatment digital phenotype and intake clinic data to predict dropout probabilities. Nearly all individuals labeled as low-risk remained in treatment, while those identified as high-risk dropped out (risk score for dropout AUC = 0.81; p < 0.001). The current study suggests the possibility of utilizing social media digital phenotypes as a new tool for intake risk assessment to identify individuals most at risk of treatment dropout and relapse.
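
The headline statistic above is an area under the ROC curve (AUC) computed from predicted dropout risk scores against observed dropout. A minimal sketch of that evaluation step; the scores and labels below are invented, not the study's data.

```python
# Minimal sketch: AUC of predicted dropout risk against observed dropout labels.
from sklearn.metrics import roc_auc_score

dropout = [0, 0, 0, 1, 1, 0, 1, 0, 1, 1]                       # 1 = dropped out of treatment
risk    = [0.10, 0.25, 0.20, 0.80, 0.65, 0.30, 0.90, 0.15, 0.70, 0.55]

# AUC = probability that a randomly chosen dropout case is scored higher than a
# randomly chosen retained case (1.0 for these toy values; 0.81 in the study).
print("dropout risk AUC:", roc_auc_score(dropout, risk))
```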


Subjects
Addictive Behavior , Social Media , Substance-Related Disorders , Humans , Addictive Behavior/therapy , Substance-Related Disorders/therapy , Patient Dropouts , Risk Factors
10.
NPJ Digit Med ; 6(1): 35, 2023 Mar 08.
Article in English | MEDLINE | ID: mdl-36882633

ABSTRACT

Targeting of location-specific aid for the U.S. opioid epidemic is difficult due to our inability to accurately predict changes in opioid mortality across heterogeneous communities. AI-based language analyses, having recently shown promise in cross-sectional (between-community) well-being assessments, may offer a way to more accurately predict community-level overdose mortality longitudinally. Here, we develop and evaluate TROP (Transformer for Opioid Prediction), a model for community-specific trend projection that uses community-specific social media language along with past opioid-related mortality data to predict future changes in opioid-related deaths. TROP builds on recent advances in sequence modeling, namely transformer networks, to use changes in yearly language on Twitter and past mortality to project the following year's mortality rates by county. Trained over five years of data and evaluated over the next two years, TROP demonstrated state-of-the-art accuracy in predicting future county-specific opioid trends. A model built using linear auto-regression and traditional socioeconomic data gave 7% error (mean absolute percentage error, MAPE), or within 2.93 deaths per 100,000 people on average; our proposed architecture was able to forecast yearly death rates with less than half that error: 3% MAPE and within 1.15 deaths per 100,000 people.
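
The error metric quoted above is mean absolute percentage error (MAPE) on county-level yearly death rates. A minimal sketch of the metric and the kind of model comparison it supports; the rates below are invented and only loosely echo the reported error levels.

```python
# Minimal sketch: mean absolute percentage error (MAPE) for death-rate forecasts.
import numpy as np

def mape(actual, predicted):
    actual, predicted = np.asarray(actual, float), np.asarray(predicted, float)
    return float(np.mean(np.abs((actual - predicted) / actual)) * 100)

actual_rates     = [38.0, 22.5, 51.0, 17.2]   # deaths per 100,000, one per county
baseline_preds   = [35.0, 24.4, 47.5, 18.4]   # e.g., a linear auto-regression baseline
transformer_pred = [37.0, 23.1, 49.9, 17.6]   # e.g., a TROP-style sequence model

print("baseline MAPE:   ", round(mape(actual_rates, baseline_preds), 1), "%")
print("transformer MAPE:", round(mape(actual_rates, transformer_pred), 1), "%")
```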

11.
Psychol Med ; 53(3): 918-926, 2023 02.
Article in English | MEDLINE | ID: mdl-34154682

ABSTRACT

BACKGROUND: Oral histories from 9/11 responders to the World Trade Center (WTC) attacks provide rich narratives about distress and resilience. Artificial Intelligence (AI) models promise to detect psychopathology in natural language, but they have been evaluated primarily in non-clinical settings using social media. This study sought to test the ability of AI-based language assessments to predict PTSD symptom trajectories among responders. METHODS: Participants were 124 responders, monitored at the Stony Brook WTC Health and Wellness Program, who completed oral history interviews about their initial WTC experiences. PTSD symptom severity was measured longitudinally using the PTSD Checklist (PCL) for up to 7 years post-interview. AI-based indicators were computed for depression, anxiety, neuroticism, and extraversion, along with dictionary-based measures of linguistic and interpersonal style. Linear regression and multilevel models estimated associations of AI indicators with concurrent and subsequent PTSD symptom severity (significance adjusted by false discovery rate). RESULTS: Cross-sectionally, greater depressive language (β = 0.32; p = 0.049) and first-person singular usage (β = 0.31; p = 0.049) were associated with increased symptom severity. Longitudinally, anxious language predicted future worsening in PCL scores (β = 0.30; p = 0.049), whereas first-person plural usage (β = -0.36; p = 0.014) and longer word usage (β = -0.35; p = 0.014) predicted improvement. CONCLUSIONS: This is the first study to demonstrate the value of AI in understanding PTSD in a vulnerable population. Future studies should extend this application to other trauma exposures and to other demographic groups, especially under-represented minorities.
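
The "significance adjusted by false discovery rate" step above is the Benjamini-Hochberg procedure applied across the family of language indicators. A minimal sketch; the p-values are invented and statsmodels is assumed to be available.

```python
# Minimal sketch: Benjamini-Hochberg false discovery rate adjustment of p-values.
from statsmodels.stats.multitest import multipletests

raw_p = [0.004, 0.011, 0.020, 0.049, 0.300]          # e.g., one per language indicator
reject, p_adj, _, _ = multipletests(raw_p, alpha=0.05, method="fdr_bh")

for p, q, sig in zip(raw_p, p_adj, reject):
    print(f"raw p={p:.3f}  BH-adjusted p={q:.3f}  significant={sig}")
```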


Subjects
Emergency Responders , September 11 Terrorist Attacks , Post-Traumatic Stress Disorders , Humans , Post-Traumatic Stress Disorders/diagnosis , Post-Traumatic Stress Disorders/epidemiology , Artificial Intelligence , Linguistics
12.
AJPM Focus ; 2(1): 100062, 2023 Mar.
Article in English | MEDLINE | ID: mdl-36573174

ABSTRACT

Introduction: Although surveys are a well-established instrument to capture the population prevalence of mental health at a moment in time, public Twitter is a continuously available data source that can provide a broader window into population mental health. We characterized the relationship between COVID-19 case counts, stay-at-home orders due to COVID-19, and anxiety and depression in 7 major U.S. cities utilizing Twitter data. Methods: We collected 18 million Tweets from January to September 2019 (baseline) and 2020 from 7 U.S. cities with large populations and varied COVID-19 response protocols: Atlanta, Chicago, Houston, Los Angeles, Miami, New York, and Phoenix. We applied machine learning-based language prediction models for depression and anxiety validated in previous work with Twitter data. As an alternative public big data source, we explored Google Trends data using search query frequencies. A qualitative evaluation of trends is presented. Results: Twitter depression and anxiety scores were consistently elevated above their 2019 baselines across all 7 locations. Twitter depression scores increased during the early phase of the pandemic, with a peak in early summer and a subsequent decline in late summer. The pattern of depression trends was aligned with national COVID-19 case trends rather than with trends in individual states. Anxiety was consistently and steadily elevated throughout the pandemic. Google search trends data showed noisy and inconsistent results. Conclusions: Our study shows the feasibility of using Twitter to capture trends of depression and anxiety during the COVID-19 public health crisis and suggests that social media data can supplement survey data to monitor long-term mental health trends.

13.
Psychol Med ; 53(13): 6205-6211, 2023 10.
Article in English | MEDLINE | ID: mdl-36377499

ABSTRACT

BACKGROUND: This study leveraged machine learning to evaluate the contribution of information from multiple developmental stages to prospective prediction of depression and anxiety in mid-adolescence. METHODS: A community sample (N = 374; 53.5% male) of children and their families completed tri-annual assessments across ages 3-15. The feature set included several important risk factors spanning psychopathology, temperament/personality, family environment, life stress, interpersonal relationships, neurocognitive, hormonal, and neural functioning, and parental psychopathology and personality. We used canonical correlation analysis (CCA) to reduce the large feature set to a lower dimensional space while preserving the longitudinal structure of the data. Ablation analysis was conducted to evaluate the relative contributions to prediction of information gathered at different developmental periods and relative to previous disorder status (i.e. age 12 depression or anxiety) and demographics (sex, race, ethnicity). RESULTS: CCA components from individual waves predicted age 15 disorder status better than chance across ages 3, 6, 9, and 12 for anxiety and 9 and 12 for depression. Only the components from age 12 for depression, and ages 9 and 12 for anxiety, improved prediction over prior disorder status and demographics. CONCLUSIONS: These findings suggest that screening for risk of adolescent depression can be successful as early as age 9, while screening for risk of adolescent anxiety can be successful as early as age 3. Assessing additional risk factors at age 12 for depression, and going back to age 9 for anxiety, can improve screening for risk at age 15 beyond knowing standard demographics and disorder history.
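
The dimensionality-reduction step described above, canonical correlation analysis (CCA) per assessment wave followed by prediction of later disorder status, can be sketched as follows. The data are simulated and the pipeline is simplified (no ablation analysis or covariate sets), so treat it only as an illustration of the CCA-then-predict idea.

```python
# Illustrative sketch: reduce a wide risk-factor block from one wave with CCA
# against later symptom measures, then predict disorder status from the components.
# All data are simulated.
import numpy as np
from sklearn.cross_decomposition import CCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
n = 374
wave_features = rng.normal(size=(n, 40))                       # e.g., age-9 risk factors
symptoms = wave_features[:, :2] @ rng.normal(size=(2, 2)) + rng.normal(size=(n, 2))

cca = CCA(n_components=2).fit(wave_features, symptoms)
components, _ = cca.transform(wave_features, symptoms)         # low-dimensional representation

# Predict a binary age-15 disorder status from that wave's CCA components.
disorder = (symptoms[:, 0] > np.quantile(symptoms[:, 0], 0.8)).astype(int)
auc = cross_val_score(LogisticRegression(), components, disorder,
                      cv=5, scoring="roc_auc").mean()
print("cross-validated AUC from one wave's components:", round(float(auc), 2))
```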


Subjects
Anxiety Disorders , Depression , Child , Humans , Male , Adolescent , Preschool Child , Female , Depression/diagnosis , Prospective Studies , Anxiety Disorders/diagnosis , Anxiety Disorders/epidemiology , Anxiety/diagnosis , Psychopathology , Longitudinal Studies
14.
Proc Int AAAI Conf Weblogs Soc Media ; 16(1): 228-240, 2022 May 31.
Article in English | MEDLINE | ID: mdl-36467573

ABSTRACT

Social media is increasingly used for large-scale population predictions, such as estimating community health statistics. However, social media users are not typically a representative sample of the intended population, a "selection bias". Within the social sciences, such a bias is typically addressed with restratification techniques, where observations are reweighted according to how under- or over-sampled their socio-demographic groups are. Yet, restratification is rarely evaluated for improving prediction. In this two-part study, we first evaluate standard, "out-of-the-box" restratification techniques, finding they provide no improvement and often even degrade prediction accuracies across four tasks of estimating U.S. county population health statistics from Twitter. The core reasons for degraded performance seem to be tied to their reliance on either sparse or shrunken estimates of each population's socio-demographics. In the second part of our study, we develop and evaluate Robust Poststratification, which consists of three methods to address these problems: (1) estimator redistribution to account for shrinking, as well as (2) adaptive binning and (3) informed smoothing to handle sparse socio-demographic estimates. We show that each of these methods leads to significant improvement in prediction accuracies over the standard restratification approaches. Taken together, Robust Poststratification enables state-of-the-art prediction accuracies, yielding a 53.0% increase in variance explained (R²) in the case of surveyed life satisfaction, and a 17.8% average increase across all tasks.
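
The basic reweighting step that all of these techniques build on can be sketched simply: weight each user by how under- or over-represented their socio-demographic bin is relative to the target population, then take a weighted mean. The bins and numbers below are invented, and the paper's Robust Poststratification adds estimator redistribution, adaptive binning, and smoothing on top of this step.

```python
# Minimal sketch of the basic (post)stratification reweighting step.
import numpy as np

# Per-user outcome scores and demographic bin (e.g., an age-by-gender cell).
scores = np.array([3.1, 2.8, 4.0, 3.5, 2.2, 3.9])
bins   = np.array(["18-29_f", "18-29_f", "18-29_m", "30-49_f", "30-49_m", "30-49_m"])

population_share = {"18-29_f": 0.20, "18-29_m": 0.20, "30-49_f": 0.30, "30-49_m": 0.30}
sample_share = {b: float(np.mean(bins == b)) for b in population_share}

# Weight = population proportion / sample proportion of the user's bin.
weights = np.array([population_share[b] / sample_share[b] for b in bins])
print("unweighted mean:    ", round(float(scores.mean()), 2))
print("poststratified mean:", round(float(np.average(scores, weights=weights)), 2))
```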

15.
Am J Drug Alcohol Abuse ; 48(5): 573-585, 2022 09 03.
Article in English | MEDLINE | ID: mdl-35853250

ABSTRACT

Background: Early indicators of who will remain in, or leave, treatment for substance use disorder (SUD) can drive targeted interventions to support long-term recovery. Objectives: To conduct a comprehensive study of linguistic markers of SUD treatment outcomes, the current study integrated features produced by machine learning models known to have social-psychological relevance. Methods: We extracted and analyzed linguistic features from participants' Facebook posts (N = 206, 39.32% female; 55,415 postings) over the two years before they entered a SUD treatment program. Exploratory features produced by both Linguistic Inquiry and Word Count (LIWC) and Latent Dirichlet Allocation (LDA) topic modeling were utilized, along with features from the theoretical domains of religiosity, affect, and temporal orientation derived via established AI-based linguistic models. Results: Patients who stayed in SUD treatment for over 90 days used more words associated with religion, positive emotions, family, affiliations, and the present, and used more first-person singular pronouns (Cohen's d values: [-0.39, -0.57]). Patients who discontinued their treatment before 90 days discussed more diverse topics, focused on the past, and used more articles (Cohen's d values: [0.44, 0.57]). All ps < .05 with Benjamini-Hochberg False Discovery Rate correction. Conclusions: We confirmed, in language analysis, the literature on protective and risk social-psychological factors linked to SUD treatment, showing that Facebook language before treatment entry can be used to identify markers of SUD treatment outcomes. This reflects the importance of taking these linguistic features and markers into consideration when designing and recommending SUD treatment plans.
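
The effect sizes reported above are Cohen's d values comparing a language feature between patients retained 90+ days and those who discontinued earlier. A minimal sketch of that statistic; the feature values below are invented.

```python
# Minimal sketch: Cohen's d for a language feature between treatment completers
# (90+ days) and early dropouts. Feature values are invented.
import numpy as np

def cohens_d(group_a, group_b):
    a, b = np.asarray(group_a, float), np.asarray(group_b, float)
    pooled_sd = np.sqrt(((len(a) - 1) * a.var(ddof=1) + (len(b) - 1) * b.var(ddof=1))
                        / (len(a) + len(b) - 2))
    return float((a.mean() - b.mean()) / pooled_sd)

dropped_early  = [0.008, 0.007, 0.011, 0.006, 0.009]   # relative frequency of the feature
stayed_90_days = [0.012, 0.015, 0.010, 0.018, 0.014]

# Negative d = the feature is more frequent among those who stayed in treatment.
print("Cohen's d:", round(cohens_d(dropped_early, stayed_90_days), 2))
```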


Subjects
Social Media , Substance-Related Disorders , Female , Humans , Language , Linguistics , Male , Substance-Related Disorders/therapy
16.
Sci Rep ; 12(1): 3918, 2022 03 10.
Article in English | MEDLINE | ID: mdl-35273198

ABSTRACT

We show that, using a recent breakthrough in artificial intelligence (transformers), psychological assessments from text responses can approach theoretical upper limits in accuracy, converging with standard psychological rating scales. Text responses use people's primary form of communication, natural language, and have been suggested as a more ecologically valid response format than the closed-ended rating scales that dominate social science. However, previous language analysis techniques left a gap between how accurately they converged with standard rating scales and how well rating scales converge with themselves, a theoretical upper limit in accuracy. Most recently, AI-based language analysis has gone through a transformation as nearly all of its applications, from Web search to personalized assistants (e.g., Alexa and Siri), have shown unprecedented improvement by using transformers. We evaluate transformers for estimating psychological well-being from questionnaire text responses and descriptive word responses, and find accuracies converging with rating scales that approach the theoretical upper limits (Pearson r = 0.85, p < 0.001, N = 608; in line with most metrics of rating scale reliability). These findings suggest an avenue for modernizing the ubiquitous questionnaire and ultimately opening doors to a greater understanding of the human condition.
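
The comparison at the heart of this result is a convergence correlation (text-based estimate vs. rating scale) set against the scale's own reliability, which acts as an approximate ceiling on attainable convergence. A minimal sketch with simulated numbers, not the study's data:

```python
# Minimal sketch: convergence of a language-based estimate with a rating scale,
# compared against the scale's own reliability as an approximate upper limit.
# All data are simulated.
import numpy as np

rng = np.random.default_rng(1)
n = 608
true_wellbeing = rng.normal(size=n)
rating_scale   = true_wellbeing + 0.40 * rng.normal(size=n)   # imperfectly reliable scale
text_estimate  = true_wellbeing + 0.45 * rng.normal(size=n)   # transformer-based estimate
scale_retest   = true_wellbeing + 0.40 * rng.normal(size=n)   # the scale measured again

r_text  = np.corrcoef(text_estimate, rating_scale)[0, 1]
ceiling = np.corrcoef(scale_retest, rating_scale)[0, 1]       # scale vs. itself over time
print(f"text vs. scale r = {r_text:.2f}; scale reliability (approx. ceiling) = {ceiling:.2f}")
```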


Subjects
Artificial Intelligence , Language , Communication , Humans , Reproducibility of Results , Surveys and Questionnaires
17.
PLoS One ; 17(2): e0264280, 2022.
Article in English | MEDLINE | ID: mdl-35196353

ABSTRACT

In March 2020, residents of the Bronx, New York experienced one of the first significant community COVID-19 outbreaks in the United States. Focusing on intensive longitudinal data from 78 Bronx-based older adults, we used a multi-method approach to (1) examine changes in the momentary psychological well-being of Einstein Aging Study (EAS) participants from 2019 to the early pandemic (February-June 2020) and (2) contextualize these changes with community distress scores derived from public Twitter posts from Bronx County. We found increases in mean loneliness from 2019 to 2020, and participants who were higher in neuroticism had greater increases in thought unpleasantness and feeling depressed. Twitter-based Bronx community scores of anxiety, depressivity, and negatively valenced affect were elevated during 2020 weeks relative to 2019. Integration of EAS participant data and community data showed week-to-week fluctuations across 2019 and 2020. Results highlight how community-level data can characterize a rapidly changing environment to supplement individual-level data at no additional burden to individual participants.


Subjects
Anxiety/pathology , COVID-19/epidemiology , Depression/pathology , Loneliness , Social Media , Affect , Aged , Aged 80 and over , COVID-19/virology , Female , Humans , Longitudinal Studies , Male , New York/epidemiology , Pandemics , SARS-CoV-2/isolation & purification
18.
J Pers ; 90(3): 405-425, 2022 06.
Article in English | MEDLINE | ID: mdl-34536229

ABSTRACT

OBJECTIVE: We explore the personality of counties as assessed through linguistic patterns on social media. Such studies were previously limited by the cost and feasibility of large-scale surveys; however, language-based computational models applied to large social media datasets now allow for large-scale personality assessment. METHOD: We applied a language-based assessment of the five-factor model of personality to 6,064,267 U.S. Twitter users. We aggregated the Twitter-based personality scores to 2,041 counties and compared them to political, economic, social, and health outcomes measured through surveys and by government agencies. RESULTS: There was significant personality variation across counties. Openness to experience was higher on the coasts, conscientiousness was uniformly spread, extraversion was higher in southern states, agreeableness was higher in western states, and emotional stability was highest in the south. Across 13 outcomes, language-based personality estimates replicated patterns that have been observed in individual-level and geographic studies, including higher Republican vote share in less agreeable counties and increased life satisfaction in more conscientious counties. CONCLUSIONS: Results suggest that regions vary in their personality and that these differences can be studied through computational linguistic analysis of social media. Furthermore, these methods may be used to explore other psychological constructs across geographies.


Subjects
Social Media , Extraversion (Psychology) , Humans , Language , Personality , Personality Assessment
19.
Proc Int AAAI Conf Weblogs Soc Media ; 16: 1064-1074, 2022 Jun.
Article in English | MEDLINE | ID: mdl-38098765

ABSTRACT

How we perceive our surrounding world impacts how we live in and react to it. In this study, we propose LaBel (Latent Beliefs Model), an alternative to topic modeling that uncovers latent semantic dimensions from transformer-based embeddings and enables their representation as generated phrases rather than word lists. We use LaBel to explore the major beliefs that humans have about the world and other prevalent domains, such as education or parenting. Although human beliefs have been explored in previous work, our proposed model helps automate the exploration process so that it relies less on human experts, saving time and manual effort, especially when working with large corpora. Our approach to LaBel uses a novel modification of autoregressive transformers to effectively generate text conditioned on a vector input. Unlike topic modeling methods, our generated texts (e.g., "the world is truly in your favor") are discourse segments rather than word lists, which helps convey semantics in a more natural manner with full context. We evaluate LaBel dimensions using both an intrusion task and a classification task of identifying categories of major beliefs in tweets, finding greater accuracy than popular topic modeling approaches.

20.
J Psychiatr Res ; 143: 239-245, 2021 11.
Article in English | MEDLINE | ID: mdl-34509091

ABSTRACT

BACKGROUND: Recent research on artificial intelligence has demonstrated that natural language can be used to provide valid indicators of psychopathology. The present study examined artificial intelligence-based language predictors (ALPs) of seven trauma-related mental and physical health outcomes in responders to the World Trade Center disaster. METHODS: The responders (N = 174, mean age = 55.4 years) provided daily voicemail updates over 14 days. Algorithms developed using machine learning in large social media discovery samples were applied to the voicemail transcriptions to derive ALP scores for several risk factors (depressivity, anxiousness, anger proneness, stress, and personality). Responders also completed self-report assessments of these risk factors at baseline and of trauma-related mental and physical health outcomes at two-year follow-up (including symptoms of depression, posttraumatic stress disorder, sleep disturbance, respiratory problems, and GERD). RESULTS: Voicemail ALPs were significantly associated with a majority of the trauma-related outcomes at two-year follow-up, over and above the corresponding baseline self-reports. ALPs showed significant convergence with corresponding self-report scales, but also considerable uniqueness from each other and from the self-report scales. LIMITATIONS: The study has a relatively short follow-up period relative to trauma occurrence and a limited sample size. CONCLUSIONS: This study provides evidence that ALPs may offer a novel, objective, and clinically useful approach to forecasting, and may in the future help to identify individuals at risk for negative health outcomes.
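
"Over and above corresponding baseline self-reports" describes an incremental-validity test: compare a model using only the baseline self-report with a model that adds the language-based predictor. A minimal sketch with simulated data (not the study's variables or effect sizes):

```python
# Illustrative sketch: incremental validity of a language-based predictor (ALP)
# over a baseline self-report when predicting a follow-up outcome. Data are simulated.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(5)
n = 174
latent_risk = rng.normal(size=n)
self_report = latent_risk + 0.8 * rng.normal(size=n)        # baseline questionnaire
alp_score   = latent_risk + 0.6 * rng.normal(size=n)        # language-based indicator
outcome     = latent_risk + 0.7 * rng.normal(size=n)        # two-year follow-up symptoms

X_base = self_report.reshape(-1, 1)
r2_base = LinearRegression().fit(X_base, outcome).score(X_base, outcome)

X_full = np.column_stack([self_report, alp_score])
r2_full = LinearRegression().fit(X_full, outcome).score(X_full, outcome)

print(f"R^2 baseline self-report only: {r2_base:.3f}")
print(f"R^2 self-report + ALP:         {r2_full:.3f}  (increment = {r2_full - r2_base:.3f})")
```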


Subjects
Disasters , Post-Traumatic Stress Disorders , Anxiety , Artificial Intelligence , Humans , Language , Middle Aged , Post-Traumatic Stress Disorders/epidemiology